This document is the preregistration for the Preferential Physics project on Lookit. Data collection is currently underway, but the sample size was set before data collection began (50 participants reaching the 12th session), and the vast majority of videos have not yet been coded for looks, much less had the critical DVs calculated.
This document contains a brief orientation to the project, links to relevant other documentation, and a reproducible set of analyses that reads in a set of pilot data, generates a simulated dataset, and attempts to set up the model comparisons we actually care about!
The idea is to use dense sampling of individual infants on Lookit to conduct a detailed assessment of their understanding of several physical principles. How stable, and how independent, are individual components across sessions? What does partial knowledge look like at the individual level?
We are interested in infants’ preferential looking ratio to simple violations of:
Gravity: Completely unsupported objects should fall down immediately, rather than moving up, continuing along their current trajectory, or moving down only after a delay.
Inertia: Objects should continue roughly in their current trajectory when gravity is not a factor, rather than stopping and starting or turning around.
Support: which of the following should fall (vs. stay put) after being placed?
An object placed mostly on the anchor
An object placed only slightly on the anchor
An object touching the side or bottom of the anchor
An object near the anchor but not touching it
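As a concrete illustration of the dependent measure, a preferential looking ratio can be computed as looking time to the violation event divided by total looking time on a trial. This is a minimal sketch only; the function name and data layout are assumptions, not the project's actual analysis code.

```python
# Hypothetical sketch of the preferential-looking-ratio DV.
# The argument names ("violation_look_s", "expected_look_s") are
# illustrative assumptions, not the project's actual coding scheme.
from typing import Optional


def looking_ratio(violation_look_s: float, expected_look_s: float) -> Optional[float]:
    """Proportion of coded looking time spent on the violation event.

    0.5 = no preference; > 0.5 = longer looking at the violation.
    Returns None if the infant looked at neither event.
    """
    total = violation_look_s + expected_look_s
    if total == 0:
        return None
    return violation_look_s / total


# Example: 12 s on the violation and 8 s on the expected event -> 0.6
print(looking_ratio(12.0, 8.0))
```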
Why study individual behavior in more detail? What do we lose in studying groups of kids? A group-level success does not tell us whether a significant fraction of kids this age can do the task, or whether all kids this age can do the task to a significant degree. Even looking at the distribution of scores doesn't clear this up, unless the result is incredibly strong (e.g. all kids get 9-10 of 10 questions right, or no kids get more than 6 out of 10). This matters for
understanding how abilities are related to each other: we can’t get nearly as much out of age-based progressions without knowing how the noise works—especially for results of the form “n-month-olds, but not m-month-olds, can do X”
understanding what partial knowledge & mechanisms of change in a domain look like: when kids “fail,” is that some kids making a correct prediction and some making an incorrect prediction? Or are they all failing to make any prediction and/or predicting at chance? When kids succeed but not at ceiling, are some getting one aspect and some another?
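The ambiguity described above can be made concrete with a toy simulation (all numbers hypothetical, not drawn from this study's data): a population in which half the kids always succeed and a population in which every kid succeeds on half of trials produce the same group-level success rate, and only the per-child score distribution separates them.

```python
# Toy simulation (hypothetical numbers): two populations with identical
# group-level success rates but very different individual-level structure.
import random

random.seed(0)
N_KIDS, N_TRIALS = 200, 10

# Population A: half the kids succeed on every trial, half on none.
pop_a = [sum(1 for _ in range(N_TRIALS) if kid < N_KIDS // 2)
         for kid in range(N_KIDS)]

# Population B: every kid succeeds on 50% of trials, independently.
pop_b = [sum(random.random() < 0.5 for _ in range(N_TRIALS))
         for _ in range(N_KIDS)]

# Group means are essentially indistinguishable (both near 5 of 10)...
print(sum(pop_a) / N_KIDS, sum(pop_b) / N_KIDS)

# ...but per-child distributions are not: A is bimodal (every score is
# 0 or 10), while B is concentrated near 5.
print(sorted(set(pop_a)))  # [0, 10]
```

The same logic motivates the dense within-child sampling: with many sessions per infant, per-child score distributions become estimable rather than being washed out in a group mean.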
It may be that some kids don’t express nearly-universal knowledge on the dependent measures we collect. We can evaluate this explanation for noise and/or individual differences by studying the methods themselves, and kids’ behavior on them: e.g., can we predict the types of preferential looking responses we get from kids based on control tasks? How stable are those controls and task performance across sessions? (Especially interesting would be differences within kids in expression: kids may genuinely express knowledge at some times, but not others, due to attentional/emotional state changes.)
Especially in development, where we’re interested in the underpinnings of human cognition, the difference between “some babies use this type of information but others do something else” and “all babies have this type of information available to them, unless something’s wrong” matters a lot—this is exactly where we care about universality.
Children complete 24 20-second preferential looking trials per session; families are encouraged to complete 15 sessions within 2 months. Parents complete a short mood survey and go through some instructions before the preferential looking portion. During this portion, they are asked to hold their children looking over their shoulders, so that parents face away from the screen and cannot bias their children's looking. Parents can end the study at any point and skip to the post-study survey.
Each trial begins with a roughly 5-second object intro: a video of Kim saying “Look, this is a …” and demonstrating the object's use (e.g. biting into an apple, putting on hand lotion, drawing with a marker, eating with a spoon).
This is intended as an attention-getter to re-orient children towards the center, while in principle reinforcing that the object in question is not an agent and should be expected to follow normal physical laws.
Then, two events involving that object are shown simultaneously, one on the left and one on the right, looping for 20s (individual event videos range from about 2-5s, so each loops continuously for the full trial). Events always show the same object, camera angle, and background, with a difference only in the “outcome.” Event types are shown in the table below; each concept is presented 4-6 times, with 2-3 repetitions of each event type. Events are shown in real time so that “expected” events occur at natural speeds and are not potentially seen as violating physical principles by happening too slowly.
Parents can pause individual trials. If they pause during the intro, the trial simply starts over upon restarting. If they pause during the test phase (allowed at most once per trial), the trial restarts from the intro, but the left and right test videos are then switched for the test phase.
(See below for descriptions of these trial types.) Trials cycle through gravity, inertia+calibration, support, and control (same/salience) pairings during a session; the order of these concepts is chosen from a list and changed (cycling through a list of orders) each session. There are six videos shown in each category in total.
Within each category, objects are assigned to comparison types (e.g. “apple” assigned to “table, down vs. up”) by choosing from a list of acceptable mappings, again incremented per session. (By accident, the first-session value was initially selected randomly from only the first six options; it is now selected from all possible mappings.)
There are six possible comparisons for the stay and fall events; three comparisons are assigned to stay and three to fall, with the selection again cycling through a random list of such assignments per session. Left/right placement, horizontal flipping of the left and right events, camera angles, and backgrounds are chosen randomly with the constraint that half of the ‘more probable’ events are on the left within each category. Calibration trials (grouped with inertia videos for purposes of assigning object intros) are placed at trials 3 and 6, so that they are always available for kids who completed enough trials for the session to be included (and so that if there are differences in coding quality across trials, we’re not excluding on the basis of when calibration happened).
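The session-level counterbalancing described above can be sketched as follows. This is a minimal illustration of the "cycle through a list of orders, incrementing each session" logic only; the actual order lists, their contents, and the function names here are assumptions, not the study's implementation.

```python
# Hypothetical sketch of per-session counterbalancing: cycle through a
# fixed list of concept orders, advancing one entry each session.
# The real study uses its own pre-generated order lists, which may differ.
from itertools import permutations

CONCEPTS = ["gravity", "inertia+calibration", "support", "control"]
ORDER_LIST = list(permutations(CONCEPTS))  # stand-in for the study's order list


def concept_order(session_index: int, start_offset: int = 0):
    """Return the concept ordering used for a given session.

    start_offset plays the role of the randomly chosen first-session
    value; later sessions increment through ORDER_LIST cyclically.
    """
    return ORDER_LIST[(start_offset + session_index) % len(ORDER_LIST)]


# Consecutive sessions get different orders; the list wraps around.
print(concept_order(0))
print(concept_order(1))
print(concept_order(len(ORDER_LIST)))  # same order as session 0
```

The same cycling pattern (a list of acceptable assignments, incremented per session, with a random starting point) would apply analogously to object-to-comparison mappings and stay/fall assignments.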
Gravity (table): Object is rolled/slid off a table and continues down, horizontally, or up.
Gravity (toss): Object in hand is tossed down but falls UP, or tossed up and falls DOWN.
Gravity (ramp): Object is placed in the center of a ramp and released, then rolls down or up.
Inertia (stop): Object rolls in from one side of the screen and stops in the middle, then restarts on its own or is restarted by a hand.
Inertia (barrier): Object rolls/slides in from one side of the screen and collides with a barrier, or follows the same trajectory with no barrier present.
Support (fall): An object is placed (mostly on / slightly on / next to / near) a cabinet and immediately falls.
Support (stay): An object is placed (mostly on / slightly on / next to / near) a cabinet and stays there.
Control (same): Distinguishable but similar physically possible human actions on objects, like rotating an object about one axis vs. another.
Control (salience): Physically possible human actions on objects, some more interesting than others, like flipping a spoon vs. slowly extending it, or erasing a drawing vs. an empty board.